The Jacobi method is the oldest method for EVD computation, dating back to 1846. The method does not require tridiagonalization. Instead, it computes a sequence of orthogonally similar matrices which converge to a diagonal matrix of eigenvalues. In each step a simple plane rotation which sets one off-diagonal element to zero is performed.
For positive definite matrices, the method computes eigenvalues with high relative accuracy.
For more details, see I. Slapničar, Symmetric Matrix Eigenvalue Techniques and Z. Drmač, Computing Eigenvalues and Singular Values to High Relative Accuracy and the references therein.
The reader should be familiar with the concepts of eigenvalues and eigenvectors, the related perturbation theory, and basic algorithms.
The reader should be able to recognise matrices that warrant high relative accuracy and to apply the Jacobi method to them.
$A$ is a real symmetric matrix of order $n$ and $A= U \Lambda U^T$ is its EVD.
The Jacobi method forms a sequence of matrices,
$$ A_0=A, \qquad A_{k+1}=G(c,s,i_k,j_k)\, A_k\, G(c,s,i_k,j_k)^T, \qquad k=0,1,2,\ldots, $$where $G(c,s,i_k,j_k)$ is the orthogonal plane rotation matrix, equal to the identity except for the four entries $$ \begin{bmatrix} G_{i_k i_k} & G_{i_k j_k}\\ G_{j_k i_k} & G_{j_k j_k}\end{bmatrix} = \begin{bmatrix} c & s\\ -s & c\end{bmatrix}, \qquad c^2+s^2=1. $$ The parameters $c$ and $s$ are chosen such that
$$ [A_{k+1}]_{i_k j_k}=[A_{k+1}]_{j_k i_k}=0. $$Such a plane rotation is also called a Jacobi rotation.
The off-norm of $A$ is
$$ \| A\|_{\mathrm{off}}=\big(\sum_{i}\sum_{j\neq i} a_{ij}^2\big)^{1/2}, $$that is, the off-norm is the Frobenius norm of the off-diagonal part of $A$.
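For illustration, the off-norm is easily computed in Julia. The following one-line helper (the name off is our choice; it is used only in the sketches in this notebook) will be handy later:
In [ ]:
using LinearAlgebra
# Off-norm = Frobenius norm of the off-diagonal part of A
off(A)=norm(A-Diagonal(diag(A)))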
The choice of pivot elements $[A_k]_{i_kj_k}$ is called the pivoting strategy.
The optimal pivoting strategy, originally used by Jacobi, chooses the pivot element such that
$$ |[A_k]_{i_k j_k}|=\max_{i<j} |[A_k]_{ij}|. $$The row-cyclic pivoting strategy chooses pivot elements in the systematic row-wise order,
$$ (1,2), (1,3), \ldots,(1,n),(2,3), (2,4),\ldots,(2,n),(3,4),\ldots,(n-1,n). $$Similarly, the column-cyclic strategy chooses pivot elements column-wise.
One pass through all off-diagonal elements is called a cycle or a sweep.
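A sketch of the pivot search for the optimal strategy follows (an illustrative helper with a name of our choosing, not used later; cyclic strategies avoid this $O(n^2)$ search before every rotation):
In [ ]:
# Return the indices of the absolutely largest off-diagonal element,
# that is, the pivot of the optimal (greedy) strategy
function maxpivot(A)
    n=size(A,1)
    imax,jmax,amax=1,2,abs(A[1,2])
    for i=1:n-1, j=i+1:n
        if abs(A[i,j])>amax
            imax,jmax,amax=i,j,abs(A[i,j])
        end
    end
    imax,jmax
end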
The Jacobi rotation parameters $c$ and $s$ are computed as follows: if $[A_k]_{i_kj_k}=0$, then $c=1$ and $s=0$; otherwise \begin{align*} & \tau=\frac{[A_k]_{i_ki_k}-[A_k]_{j_kj_k} }{2[A_k]_{i_kj_k} },\qquad t=\frac{\mathop{\mathrm{sign}}(\tau)}{|\tau|+\sqrt{1+\tau^2}},\\ & c=\frac{1}{\sqrt{1+t^2}},\qquad s=c\cdot t. \end{align*}
After each rotation, the off-norm decreases: $$ \|A_{k+1}\|_{\mathrm{off}}^2=\|A_{k}\|_{\mathrm{off}}^2-2[A_k]_{i_kj_k}^2. $$ With an appropriate pivoting strategy, the method converges in the sense that $$ \|A_{k}\|_{\mathrm{off}}\to 0,\qquad A_k\to\Lambda, \qquad \prod_{k=0}^{\infty} G(c,s,i_k,j_k)^T \to U. $$
For the optimal pivoting strategy, the square of the pivot element is at least the average of the squared off-diagonal elements, $$ [A_k]_{i_kj_k}^2\geq \frac{1}{n(n-1)}\, \|A_k\|_{\mathrm{off}}^2 . $$ Thus, $$ \|A_{k+1}\|_{\mathrm{off}}^2\leq\left(1-\frac{2}{n(n-1)}\right)\|A_{k}\|_{\mathrm{off}}^2, $$ and the method converges.
For the row-cyclic and the column-cyclic pivoting strategies, the method converges. The convergence is ultimately quadratic in the sense that $$ \|A_{k+n(n-1)/2}\|_{\mathrm{off}} \leq \mathrm{const}\cdot \|A_{k}\|_{\mathrm{off}}^2, $$ provided $\|A_{k}\|_{\mathrm{off}}$ is sufficiently small.
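The rapid decrease of the off-norm can be observed with a short experiment. The following sketch (sweepdemo is an illustrative name of our choosing; it uses the rotation formulas above and the helper off() defined earlier, but omits the tolerance test and eigenvector accumulation of the full method) applies row-cyclic sweeps to a small random symmetric matrix and prints the off-norm after each sweep:
In [ ]:
# Row-cyclic sweeps; prints the off-norm per sweep to observe convergence
function sweepdemo(A₀::Matrix, nsweeps=5)
    A=copy(A₀)
    n=size(A,1)
    for sweep=1:nsweeps
        for i=1:n-1, j=i+1:n
            if A[i,j]!=0
                τ=(A[i,i]-A[j,j])/(2*A[i,j])
                t=sign(τ)/(abs(τ)+sqrt(1+τ^2))
                c=1/sqrt(1+t^2); s=c*t
                G=LinearAlgebra.Givens(i,j,c,s)
                # Apply the rotation from both sides, A = G*A*G'
                lmul!(G,A); rmul!(A,adjoint(G))
                A[i,j]=A[j,i]=0.0
            end
        end
        println("sweep ", sweep, ": off-norm = ", off(A))
    end
end
sweepdemo(Matrix(Symmetric(rand(6,6))))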
The EVD computed by the Jacobi method satisfies the standard error bounds.
The Jacobi method is suitable for parallel computation. There exist convergent parallel strategies which enable simultaneous execution of several rotations.
The Jacobi method is simple, but it is slower than the methods based on tridiagonalization. It is conjectured that standard implementations require $O(n^3\log n)$ operations: each cycle clearly requires $O(n^3)$ operations, and it is conjectured that $O(\log n)$ cycles are needed until convergence.
If $A$ is positive definite, the method can be modified such that it reaches the speed of the methods based on tridiagonalization and at the same time computes the EVD with high relative accuracy.
In each step, the rotation diagonalizes the $2\times 2$ pivot submatrix: $$ \begin{bmatrix} c & s\\-s& c\end{bmatrix} \begin{bmatrix} a & b\\ b & d\end{bmatrix} \begin{bmatrix} c & s\\-s& c\end{bmatrix}^T = \begin{bmatrix} \tilde a & 0 \\ 0 &\tilde d\end{bmatrix}. $$
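A quick numerical check of this $2\times 2$ step using the above formulas for $c$ and $s$ (the sample values of $a$, $b$, $d$ are arbitrary):
In [ ]:
using LinearAlgebra
# One Jacobi rotation on a 2×2 symmetric matrix annihilates the off-diagonal
a,b,d=1.0,2.0,3.0
τ=(a-d)/(2b)
t=sign(τ)/(abs(τ)+sqrt(1+τ^2))
c=1/sqrt(1+t^2); s=c*t
R=[c s;-s c]
R*[a b;b d]*R'   # the off-diagonal entries vanish up to roundoff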
In [31]:
using LinearAlgebra
function myJacobi(A::Array{T}) where T<:Real
    n=size(A,1)
    U=Matrix{T}(I,n,n)
    # Tolerance for rotation
    tol=sqrt(map(T,n))*eps(T)
    # Counters
    p=n*(n-1)÷2
    sweep=0
    pcurrent=0
    # First criterion is for standard accuracy, second one is for relative accuracy
    while sweep<10 && norm(A-Diagonal(diag(A)))>tol
    # while sweep<30 && pcurrent<p
        sweep+=1
        # Row-cyclic strategy
        for i = 1 : n-1
            for j = i+1 : n
                # Check for the tolerance - the first criterion is standard,
                # the second one is for relative accuracy for PD matrices
                # if A[i,j]!=zero(T)
                if abs(A[i,j])>tol*sqrt(abs(A[i,i]*A[j,j]))
                    # Compute c and s
                    τ=(A[i,i]-A[j,j])/(2*A[i,j])
                    t=sign(τ)/(abs(τ)+sqrt(1+τ^2))
                    c=one(T)/sqrt(one(T)+t^2)
                    s=c*t
                    G=LinearAlgebra.Givens(i,j,c,s)
                    # Apply the rotation from both sides and zero the pivot
                    A=G*A
                    A*=G'
                    A[i,j]=zero(T)
                    A[j,i]=zero(T)
                    # Accumulate the eigenvectors
                    U*=G'
                    pcurrent=0
                    # To observe convergence
                    # display(A)
                else
                    pcurrent+=1
                end
            end
        end
        # display(A)
    end
    diag(A), U
end
Out[31]:
In [10]:
methodswith(LinearAlgebra.Givens);
In [11]:
import Random
Random.seed!(516)
n=4
A=Matrix(Symmetric(rand(n,n)))
Out[11]:
In [20]:
λ,U=myJacobi(A)
Out[20]:
In [21]:
# Orthogonality
U'*U
Out[21]:
In [22]:
# Residual
A*U-U*Diagonal(λ)
Out[22]:
In [23]:
# Positive definite matrix
n=100
A=rand(n,n)
A=Matrix(Symmetric(A'*A));
In [33]:
@time λ,U=myJacobi(A)
norm(U'*U-I),norm(A*U-U*Diagonal(λ))
Out[33]:
In [25]:
λ
Out[25]:
In [26]:
cond(A)
Out[26]:
In [27]:
# Now the standard QR method
λₛ,Uₛ=eigen(A);
In [29]:
norm(Uₛ'*Uₛ-I),norm(A*Uₛ-Uₛ*Diagonal(λₛ))
Out[29]:
The function `myJacobi()` is accurate but very slow; notice the extremely high memory allocation in the timings below. The two key elements in reducing the allocations are: working on a copy of the input matrix, so that the argument is not overwritten, and applying the rotations with the in-place multiplication routines, which are denoted in Julia by a trailing `!`.
In [34]:
@time eigen(A);
In [35]:
@time myJacobi(A);
In [41]:
function myJacobi(A1::Array{T}) where T<:Real
    # Work on a copy so that the input matrix is preserved
    A=deepcopy(A1)
    n=size(A,1)
    U=Matrix{T}(I,n,n)
    # Tolerance for rotation
    tol=sqrt(map(T,n))*eps(T)
    # Counters
    p=n*(n-1)÷2
    sweep=0
    pcurrent=0
    # First criterion is for standard accuracy, second one is for relative accuracy
    # while sweep<30 && norm(A-Diagonal(diag(A)))>tol
    while sweep<30 && pcurrent<p
        sweep+=1
        # Row-cyclic strategy
        for i = 1 : n-1
            for j = i+1 : n
                # Check for the tolerance - the first criterion is standard,
                # the second one is for relative accuracy for PD matrices
                # if A[i,j]!=zero(T)
                if abs(A[i,j])>tol*sqrt(abs(A[i,i]*A[j,j]))
                    # Compute c and s
                    τ=(A[i,i]-A[j,j])/(2*A[i,j])
                    t=sign(τ)/(abs(τ)+sqrt(1+τ^2))
                    c=1/sqrt(1+t^2)
                    s=c*t
                    G=LinearAlgebra.Givens(i,j,c,s)
                    # In-place multiplications instead of A=G*A and A*=G'
                    lmul!(G,A)
                    rmul!(A,adjoint(G))
                    A[i,j]=zero(T)
                    A[j,i]=zero(T)
                    # In-place accumulation of the eigenvectors, instead of U*=G'
                    rmul!(U,adjoint(G))
                    pcurrent=0
                else
                    pcurrent+=1
                end
            end
        end
    end
    diag(A), U
end
Out[41]:
In [43]:
@time λ,U=myJacobi(A);
In [44]:
norm(U'*U-I),norm(A*U-U*Diagonal(λ))
Out[44]:
$A$ is a real symmetric PD matrix of order $n$ and $A=U\Lambda U^T$ is its EVD.
The scaled matrix of the matrix $A$ is the matrix $$ A_S=D^{-1} A D^{-1}, \quad D=\mathop{\mathrm{diag}}(\sqrt{A_{11}},\sqrt{A_{22}},\ldots,\sqrt{A_{nn}}). $$
The above diagonal scaling is nearly optimal (van der Sluis): $$ \kappa_2(A_S)\leq n \min\limits_{D=\mathrm{diag}} \kappa_2(DAD) \leq n\,\kappa_2(A). $$
Let $A$ and $\tilde A=A+\Delta A$ both be positive definite, and let their eigenvalues have the same ordering. Then $$ \frac{|\lambda_i-\tilde\lambda_i|}{\lambda_i}\leq \frac{\| D^{-1} (\Delta A) D^{-1}\|_2}{\lambda_{\min} (A_S)}\equiv \|A_S^{-1}\|_2 \| \Delta A_S\|_2. $$ If $\lambda_i$ and $\tilde\lambda_i$ are simple, then $$ \|U_{:,i}-\tilde U_{:,i}\|_2 \leq \frac{\| A_S^{-1}\|_2 \|\Delta A_S\|_2} {\displaystyle\min_{j\neq i}\frac{|\lambda_i-\lambda_j|}{\sqrt{\lambda_i\lambda_j}}}. $$ These bounds are much sharper than the standard bounds for matrices for which $\kappa_2(A_S)\ll \kappa_2(A)$.
The Jacobi method with the relative stopping criterion $$ |A_{ij}|\leq \mathrm{tol}\,\sqrt{A_{ii}A_{jj}}, \quad \forall i\neq j, $$ and some user-defined tolerance $\mathrm{tol}$ (usually $\mathrm{tol}=n\varepsilon$), computes the EVD with small scaled backward error $$ \|\Delta A_S\|\leq \varepsilon\, O(\|A_S\|_2)\leq O(n)\,\varepsilon, $$ provided that $\kappa_2([A_k]_S)$ does not grow much during the iterations. There is overwhelming numerical evidence that the scaled condition number does not grow much; moreover, its growth can be monitored.
The proofs of the above facts are in J. Demmel and K. Veselić, Jacobi's method is more accurate than QR.
In [45]:
D=Diagonal([1,2,3,4,1000])
Out[45]:
In [48]:
Random.seed!(431)
n=6
A=rand(n,n)
A=Matrix(Symmetric(A'*A));
Aₛ=[A[i,j]/sqrt(A[i,i]*A[j,j]) for i=1:n, j=1:n]
Out[48]:
In [49]:
A
Out[49]:
In [50]:
cond(Aₛ), cond(A)
Out[50]:
In [51]:
# We add a strong scaling
D=exp.(50*(rand(n).-0.5))
Out[51]:
In [52]:
H=Diagonal(D)*Aₛ*Diagonal(D)
Out[52]:
In [53]:
# Now we scale again
Hₛ=[H[i,j]/sqrt(H[i,i]*H[j,j]) for i=1:n, j=1:n]
Out[53]:
In [54]:
cond(Hₛ),cond(H)
Out[54]:
In [55]:
# Jacobi method
λ,U=myJacobi(H)
Out[55]:
In [56]:
# Standard QR method
λ₁,U₁=eigen(H)
Out[56]:
In [64]:
# Compare
[sort(λ) sort(λ₁)]
Out[64]:
In [70]:
λ[1]
Out[70]:
In [67]:
sort(λ)-sort(λ₁)
Out[67]:
In [66]:
(sort(λ)-sort(λ₁))./sort(λ)
Out[66]:
In [71]:
# Check with BigFloat
λ₂,U₂=myJacobi(map(BigFloat,H))
λ₂
Out[71]:
In [69]:
# Relative error is eps()*cond(AS)
map(Float64,(sort(λ₂)-sort(λ))./sort(λ₂))
Out[69]:
The spectral absolute value of the matrix $A$ is the matrix
$$ |A|_{\mathrm{spr}}=(A^2)^{1/2}. $$This is the positive definite factor in the polar decomposition of $A$.
The above perturbation bounds for positive definite matrices essentially hold with $A_S$ replaced by $[|A|_{\mathrm{spr}}]_S$.
The Jacobi method can be modified to compute the EVD with small backward error $\| \Delta [|A|_{\mathrm{spr}}]_S\|_2$.
The details of the indefinite case are beyond the scope of this course; the interested reader is referred to the references.
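A small illustration of the spectral absolute value, computed here (for demonstration only) from the ordinary EVD; the high relative accuracy algorithms for the indefinite case do not form $|A|_{\mathrm{spr}}$ explicitly:
In [ ]:
using LinearAlgebra
# |A|_spr = U*|Λ|*Uᵀ agrees with (A²)^(1/2)
A=Matrix(Symmetric(rand(5,5).-0.5))   # a (generically) indefinite symmetric matrix
λ,U=eigen(A)
Aspr=U*Diagonal(abs.(λ))*U'
norm(Aspr-sqrt(Symmetric(A*A)))   # ≈ 0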